Slow Feature Analysis
We thank the reviewers for thoroughly commenting on our article; their comments give us the opportunity to improve
For Montezuma's Revenge, the average prediction error is such that the irrelevant intrinsic reward completely obscures the target goal. The less information is available about a step, the more uncertain the model and the higher the error. R4: in general, we cannot guarantee that the prediction error is a measure of uncertainty. For an intuition about the W-MSE representation and stochasticity, consider the noisy-TV experiment: there is a TV in the Atari environment, and we compare against the best-performing methods such as NGU. To show how the seed affects performance, we included Figure 1 with training dynamics in the supplementary material.
Recursive State Inference for Linear PASFA
Recent probabilistic extensions to SFA learn effective representations for classification tasks. Notably, Probabilistic Adaptive Slow Feature Analysis (PASFA) models the slow features as states in an ARMA process and estimates the model from the observations. However, efficient methods are still needed to infer the states (slow features) from the observations and the model. In this paper, a recursive extension to linear PASFA is proposed. The proposed algorithm performs MMSE estimation of states evolving according to an ARMA process, given the observations and the model. Although current methods tackle this problem using Kalman filters after transforming the ARMA process into a state-space model, the original states (or slow features) that form useful representations cannot be easily recovered. The proposed technique is evaluated on a synthetic dataset to demonstrate its correctness.
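The Kalman-filter route mentioned in the abstract can be sketched concretely. Below is a minimal, hypothetical illustration (not the paper's algorithm): a scalar AR(2) process is rewritten in companion state-space form and a standard Kalman filter produces MMSE estimates of the latent states from noisy observations. All variable names and parameter values are illustrative assumptions.

```python
import numpy as np

def kalman_filter(y, F, H, Q, R, x0, P0):
    """Standard Kalman filter; returns the filtered state means.

    y: sequence of observation vectors; F, H: transition/observation
    matrices; Q, R: process/observation noise covariances.
    """
    x, P = x0, P0
    filtered = []
    for obs in y:
        # Predict step: propagate state and covariance through the dynamics.
        x = F @ x
        P = F @ P @ F.T + Q
        # Update step: correct with the new observation.
        S = H @ P @ H.T + R                      # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
        x = x + K @ (obs - H @ x)
        P = (np.eye(len(x)) - K @ H) @ P
        filtered.append(x.copy())
    return np.array(filtered)
```

For an AR(2) process x_t = 1.5 x_{t-1} - 0.7 x_{t-2} + w_t observed as y_t = x_t + v_t, the companion form uses state [x_t, x_{t-1}] with F = [[1.5, -0.7], [1, 0]], H = [[1, 0]], and process noise entering only the first state component. The abstract's point is that for ARMA (not pure AR) dynamics, this state-space rewriting obscures the original slow features, which motivates the proposed recursive inference.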
Slow Feature Analysis as Variational Inference Objective
Merlin Schüler, Laurenz Wiskott
Developing probabilistic perspectives on established machine learning algorithms can be a promising endeavor, as it casts methods originating from, for example, geometric or heuristic concepts into a well-understood framework that allows one to make explicit the assumptions and the dependencies that are inherent in the resulting model. Many methods have been described in this shared language, even spanning the broad machine learning paradigms of unsupervised, supervised, and reinforcement learning. This makes it possible to compare methods, understand shortcomings, and propose extensions through a rich body of broad research. Furthermore, previous research on a specific method that was generalized in such a way might prove to be useful for the field of probabilistic modeling itself. After all, the most efficient methods for probabilistic inference under a model are rarely the most general and often leverage the model-specific structure (Kalman, 1960; Margossian & Blei, 2024). In this work, a soft variant of Slow Feature Analysis (SFA) (Wiskott, 1998; Wiskott & Sejnowski, 2002) is derived using the language of probabilistic inference.
A Biologically Plausible Neural Network for Slow Feature Analysis
Learning latent features from time series data is an important problem in both machine learning and brain function. One approach, called Slow Feature Analysis (SFA), leverages the slowness of many salient features relative to the rapidly varying input signals. Furthermore, when trained on naturalistic stimuli, SFA reproduces interesting properties of cells in the primary visual cortex and hippocampus, suggesting that the brain uses temporal slowness as a computational principle for learning latent features. However, despite the potential relevance of SFA for modeling brain function, there is currently no SFA algorithm with a biologically plausible neural network implementation, by which we mean an algorithm that operates in the online setting and can be mapped onto a neural network with local synaptic updates. In this work, starting from an SFA objective, we derive an SFA algorithm, called Bio-SFA, with a biologically plausible neural network implementation.
Replacing supervised classification learning by Slow Feature Analysis in spiking neural networks
Stefan Klampfl, Wolfgang Maass
It is an open question how neurons in the brain are able to learn without supervision to discriminate between spatio-temporal firing patterns of presynaptic neurons. We show that a known unsupervised learning algorithm, Slow Feature Analysis (SFA), is able to acquire the classification capability of Fisher's Linear Discriminant (FLD), a powerful algorithm for supervised learning, if temporally adjacent samples are likely to be from the same class. We also demonstrate that it enables linear readout neurons of cortical microcircuits to learn the detection of repeating firing patterns within a stream of spike trains with the same firing statistics, as well as discrimination of spoken digits, in an unsupervised manner.
Review for NeurIPS paper: A Biologically Plausible Neural Network for Slow Feature Analysis
Summary and Contributions: This paper presents a so-called biologically plausible neural network for slow feature analysis. Biological plausibility here means that network learning is online and based on local synaptic learning rules. These online and locality requirements might lead to low computational overhead. While Foldiak, Wiskott, and many others have explored online local learning for SFA in the last thirty years, this paper attempts to relate SFA to a normative theory through an MDS objective.